Meilisearch 1.14
Meilisearch 1.14 introduces new experimental features, including composite embedders and an embedding cache to boost performance. It also adds core features such as granular filterable attributes and batch document retrieval by ID.

We're excited to announce the release of Meilisearch v1.14. In this article, we'll highlight the key changes and improvements in this release.
For a complete list of all updates and fixes, please visit the changelog on GitHub.
These powerful new features are coming soon to Meilisearch Cloud. Sign up now to be among the first to experience the latest improvements!
New: granular filterable attributes settings
With Meilisearch v1.14, you can configure filterable attributes using an advanced object format that lets you specify exactly which filtering features to enable for each attribute pattern:
{ "filterableAttributes": [ { "attributePatterns": ["genre", "artist"], "features": { "facetSearch": true, "filter": {"equality": true, "comparison": false } } }, { "attributePatterns": ["*rank"], "features": { "facetSearch": false, "filter": {"equality": true, "comparison": true } } }, { "attributePatterns": ["albumId"], "features": { "facetSearch": false, "filter": {"equality": true, "comparison": false } } }, ] }
- String attributes (e.g., genre, artist): enable facet search and equality operators (= / !=) for categorical filtering, while disabling comparison operators (>, <, >=, <=), which are unnecessary for string values.
- Numeric attributes (e.g., *rank): enable comparison operators (>, <, >=, <=) for range filtering, while disabling facet search, as it's not relevant for numerical data.
- Unique identifiers (e.g., albumId): enable equality operators for exact matching, while disabling unused features such as comparison operators and facet search.
This targeted configuration can notably boost indexing performance.
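For example, with the configuration above in place, a search request could combine an equality filter on a string attribute with a range filter on a numeric one. A minimal sketch (the index name, query, and field values are placeholders for illustration):
// POST /indexes/INDEX_UID/search
{
  "q": "nevermind",
  "filter": "genre = 'grunge' AND rank <= 100"
}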
Experimental: composite embedders
Meilisearch 1.14 introduces a new experimental feature that lets you use different embedders at search and indexing time, allowing you to optimize for both throughput and latency.
- Use a remote embedder for indexing in bulk, as remote embedders provide the highest bandwidth (embeddings/s)
- Use a local embedder for answering search queries, as local embedders provide the lowest latency (time to first embedding)
Configure your embedders using the new composite source type. For instance, you can combine a local Hugging Face model for fast search with a remote inference endpoint for efficient indexing:
{ "embedders": { "text": { "source": "composite", "searchEmbedder": { "source": "huggingFace", // locally computed embeddings using a model from the Hugging Face Hub "model": "baai/bge-base-en-v1.5", "revision": "a5beb1e3e68b9ab74eb54cfd186867f64f240e1a" }, "indexingEmbedder": { "source": "rest", // remotely computed embeddings using Hugging Face inference endpoints "url": "https://URL.endpoints.huggingface.cloud", "apiKey": "hf_XXXXXXX", "documentTemplate": "Your {{doc.template}}", "request": { "inputs": [ "{{text}}", "{{..}}" ] }, "response": [ "{{embedding}}", "{{..}}" ] } } } }
Meilisearch automatically selects the appropriate embedder based on the current operation.
Activate this feature in your project overview by checking the "Composite embedders" box under "Experimental features". If you are self-hosting Meilisearch, enable it via the /experimental-features route.
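For self-hosted instances, enabling it is a single call to that route. A minimal sketch (the compositeEmbedders field name is our assumption based on the feature's name; check the experimental features reference for the exact key):
// PATCH /experimental-features
{
  "compositeEmbedders": true
}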
Experimental: cache embeddings
Meilisearch 1.14 brings a new experimental feature that allows you to cache search query embeddings, significantly improving performance when the same query is run multiple times.
To enable the search query embedding cache, launch Meilisearch with either instance option specifying the maximum number of entries to store in the cache:
- the --experimental-embedding-cache-entries=150 flag
- the MEILI_EXPERIMENTAL_EMBEDDING_CACHE_ENTRIES=150 environment variable
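For example, a self-hosted instance could be launched like this (the binary path is illustrative):
./meilisearch --experimental-embedding-cache-entries=150
# or, equivalently
MEILI_EXPERIMENTAL_EMBEDDING_CACHE_ENTRIES=150 ./meilisearch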
When enabled, Meilisearch stores the embedding vectors for search queries, eliminating the need to repeatedly generate embeddings for identical queries. This is particularly valuable in scenarios where:
- The same queries are frequently repeated
- You're using multi-search across multiple indexes
- You've implemented local sharding where identical queries are sent to different indexes
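In the multi-search case, for instance, the same query text only needs to be embedded once and can then be reused across indexes. A minimal sketch, assuming two hypothetical indexes (movies-en and movies-fr) that each expose an embedder named text:
// POST /multi-search
{
  "queries": [
    { "indexUid": "movies-en", "q": "space opera", "hybrid": { "embedder": "text" } },
    { "indexUid": "movies-fr", "q": "space opera", "hybrid": { "embedder": "text" } }
  ]
}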
New: get documents by ID
You can now get a set of documents by their primary keys:
// POST /indexes/INDEX_UID/documents/fetch
{
  "ids": [785084, 44214, 473]
}
The above query will return the corresponding documents:
{ "results": [ { "id": 44214, "title": "Black Swan" }, { "id": 473, "title": "Pi" }, { "id": 785084, "title": "The Whale" } ], "offset": 0, "limit": 20, "total": 3 }
Contributors shout-out
We want to give a massive thank you to the contributors who made this release possible. Special thanks to @MichaScant for their work on Meilisearch, @oXtxNt9U for contributions to Heed, @ptondereau for efforts on Arroy, and both @NarHakobyan and @mosuka for their contributions to Charabia. We’re deeply grateful for your support and collaboration.
And that’s a wrap for v1.14! These release notes only highlight the most significant updates. For an exhaustive listing, read the changelog on GitHub.
For more information, subscribe to our monthly newsletter, or join our Product Discussions.
For anything else, join our developers community on Discord.